4 research outputs found

    Vandalism on Collaborative Web Communities: An Exploration of Editorial Behaviour in Wikipedia

    Modern online discussion communities allow people to contribute, sometimes anonymously. While such flexibility is understandable, it can threaten the reputation and reliability of community-owned resources. Because little previous work has addressed these threats, studying them is important for building an understanding of ongoing vandalism of Wikipedia pages and of ways to prevent it. In this study, we consider the types of activity that anonymous users carry out on Wikipedia and how others react to those activities. In particular, we study vandalism of Wikipedia pages and ways of preventing this kind of activity. Our preliminary analysis reveals that roughly 90% of vandalism, or foul edits, is carried out by unregistered users, a consequence of Wikipedia's openness. The community reaction appears to be immediate: most vandalism was reverted within five minutes on average. Further analysis sheds light on the tolerance of the Wikipedia community, the reliability of anonymous users' revisions, and the feasibility of early prediction of vandalism.
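    To make the two reported measurements concrete, the following is a minimal sketch (not the authors' code) of how they could be computed from a labelled page-history export. The Revision record and all of its fields are hypothetical stand-ins for whatever edit data the study actually used.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import List, Optional

@dataclass
class Revision:
    """One row of a hypothetical labelled page-history export."""
    timestamp: datetime
    registered: bool                  # False for anonymous (IP) editors
    is_vandalism: bool                # labelled as a foul edit
    reverted_at: Optional[datetime]   # when the edit was undone, if ever

def vandalism_stats(revisions: List[Revision]) -> dict:
    vandal = [r for r in revisions if r.is_vandalism]
    # Share of vandalism attributable to unregistered users (~90% in the study).
    anon_share = sum(not r.registered for r in vandal) / len(vandal)
    # Minutes from a foul edit to its revert (~5 minutes on average in the study).
    revert_minutes = [
        (r.reverted_at - r.timestamp).total_seconds() / 60
        for r in vandal
        if r.reverted_at is not None
    ]
    return {
        "anonymous_share": anon_share,
        "mean_revert_minutes": mean(revert_minutes),
    }
```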

    Exploring the characteristics of abusive behaviour in online social media settings

    Online abusive behaviour can impair interaction among contributors and moderators, and may escalate to threats or even physical harm. Existing research has not addressed how moderation activity is perceived, nor how discussion and disagreement can lead contributors to react aggressively. This thesis investigates the factors that lead to abusive behaviour in conversations within online settings. In particular, empirical analyses were conducted to identify the factors that contribute to abuse in online settings and to distinguish between polite and abusive forms of disagreement. The research presents three contributions, addressed respectively to the social computing, computational social science, and cyber-abuse research domains. The analyses suggest that moderators on Reddit view themselves as members of their community and work hard both to guard against violations and to work with contributors to improve the quality of their content. Moderators also reported the nuances that distinguish polite from abusive disagreement. Furthermore, the analyses revealed that the differences between in-person and online conversations can help identify abusive behaviour. Specifically, the online setting fosters participant behaviours (less hedging, more extreme sentiment, greater willingness to express personal opinions, and straying from the topic) that are known to increase the likelihood of abusive behaviour. Additionally, the findings revealed how consensus-building factors can influence disagreement in different settings. Finally, we showed how disagreement can be identified from linguistic context and how it can affect votes. Different forms of disagreement were detected more accurately when multi-label text classification models were trained on specific abuse, politeness, and sentiment textual features. These findings inform the design of moderation systems that combat online abusive behaviour by analysing the type of disagreement a contribution embodies alongside other linguistic and behavioural characteristics.
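    As an illustration of the classification approach, the following is a minimal multi-label sketch using scikit-learn (an assumption; the thesis does not name its toolkit). The label set, example comments, and plain TF-IDF features are hypothetical stand-ins for the abuse, politeness, and sentiment features described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus: each comment may carry several labels at once.
comments = [
    "I see your point, but the data suggests otherwise.",
    "That is a ridiculous take and you know it.",
]
labels = [
    {"polite_disagreement"},
    {"abusive_disagreement", "negative_sentiment"},
]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)   # one binary column per label

# Multi-label setup: one logistic-regression classifier per label,
# trained over shared TF-IDF features (unigrams and bigrams).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, y)

predicted = model.predict(["Honestly, that argument is nonsense."])
print(binarizer.inverse_transform(predicted))
```

    In the thesis's setting, richer linguistic features (hedging, sentiment extremity, politeness cues) would stand in for the plain TF-IDF representation used in this sketch.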

    Understanding Abusive Behaviour Between Online and Offline Group Discussions

    Online discussion platforms face multiple challenges from abusive behaviour. In order to understand why such behaviour persists, we need to understand how users behave both inside and outside a community. In this paper, we propose a novel methodology for generating a dataset from offline and online group-discussion conversations. We advocate an empirically based approach to exploring the space of abusive behaviour. We conducted a user study (N = 15) to understand what factors facilitate or amplify forms of behaviour in online conversation that are less likely to be tolerated face-to-face. The preliminary analysis validates our approach to analysing large-scale conversation datasets.

    Public trust and understanding of online content moderation, and its impacts on public discourse

    The dataset is the product of a scoping study that examined the mechanics and rationale of online content moderation, decision-making and its influences, and levels of trust in and understanding of content moderation policies. First, through survey work and focus groups with content moderators, it examined moderation behaviour and role perception, decision-making processes, and assessments of the effectiveness of moderation policies. Second, through qualitative work with social media users, it explored particular models of trust, including the way in which trust is reworked in different contexts and its influencing factors, such as the alignment of expectations and acceptable losses and gains. A key area of investigation was the degree of comprehensibility of online moderation mechanisms and policies and their impact on trust and credibility. Through a dialogue-workshop methodology, moderators and users were brought together to highlight commonalities and differences in their expectations and understandings of moderation, and how these relate to value-based belief systems and ideas of ethical behaviour. As such, the dataset is made up of two strands: a) transcriptions of five 1.5-hour focus groups with 4-5 participants each, two of which were with content moderators (one with Reddit moderators, one with moderators from a national news outlet) and the fifth (the dialogue workshop) combining previous participants from the Reddit moderator group with social media users; b) data from an online survey with a return sample of 218 Reddit moderators.